We spend a lot of time making our analyses reproducible. A review would give us some evidence of whether we are succeeding.
This article used an open-source Python repository for its analysis. It is well suited for reproduction as more literature emerges at the intersection of urban planning and climate change. The adapted code is published alongside the article.
This article was meant to be entirely reproducible, with the data and code published alongside it. It is, however, not embedded within a container (e.g. Docker). Will it pass the reproducibility test tomorrow? Next year? I'm curious.
This paper represents an important milestone in meta-science, as it is one of the first large-scale replication projects outside the social sciences.
Most of the material is available through Jupyter notebooks on GitHub, and it should be easy to reproduce with the help of Binder. With the notebooks, you could experiment with parameters different from the ones analyzed in the paper. The repository also contains a large dataset of physical parameters of the galaxies analysed in this work. We expect this work to be easily reproducible by following the steps described in the repository.
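As a loose illustration of that kind of experimentation, a notebook user might load the galaxy-parameter table and re-run a selection with a different cut. The file name, column names, and thresholds below are hypothetical, not the repository's actual layout:

```python
# Hypothetical example of experimenting with a published galaxy catalogue.
# The file name, column names, and mass cuts are illustrative only.
import pandas as pd

catalog = pd.read_csv("galaxy_parameters.csv")  # hypothetical file name

# The paper might use one threshold; in the notebook you could try others:
for mass_cut in (1e9, 1e10, 1e11):
    subset = catalog[catalog["stellar_mass_msun"] > mass_cut]
    print(f"M* > {mass_cut:.0e} Msun: {len(subset)} galaxies, "
          f"median SFR = {subset['sfr_msun_per_yr'].median():.2f}")
```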
1. Because it contains customized numerical methods that implement analytical solutions for an engineering problem relevant to cryogenic storage. This will become increasingly relevant in the future as the use of liquid hydrogen and LNG as fuels grows.
2. The storage tank is implemented as a class, and there is an opportunity to understand the authors' object-oriented programming mindset (a rough sketch of this kind of design follows this list).
3. The provided Jupyter Notebook includes thermodynamic data for nitrogen and methane, which enables users to get started quickly.
4. To reproduce some of the figures and results, the storage tanks need to be modified with inputs available in the paper.
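To make item 2 concrete, here is a minimal sketch of what a storage tank modeled as a class might look like. The class name, attributes, and the steady-state boil-off estimate are hypothetical and deliberately simplified; they are not the authors' actual implementation, which should be consulted in the notebook itself:

```python
# Hypothetical sketch of a cryogenic storage tank as a class.
# All names, attributes, and formulas are illustrative only;
# the actual implementation in the paper's notebook will differ.
from dataclasses import dataclass

@dataclass
class StorageTank:
    """A simplified cryogenic storage tank (illustrative)."""
    volume_m3: float              # internal tank volume
    fill_fraction: float          # liquid fill level, 0..1
    heat_ingress_w: float         # total heat leak into the tank
    latent_heat_j_per_kg: float   # latent heat of vaporization of the fluid
    liquid_density_kg_m3: float   # density of the stored liquid

    def boil_off_rate_kg_per_s(self) -> float:
        """Steady-state boil-off: assume all heat ingress evaporates liquid."""
        return self.heat_ingress_w / self.latent_heat_j_per_kg

    def daily_boil_off_percent(self) -> float:
        """Boil-off per day as a percentage of the stored liquid mass."""
        stored_mass = self.volume_m3 * self.fill_fraction * self.liquid_density_kg_m3
        return 100.0 * self.boil_off_rate_kg_per_s() * 86_400 / stored_mass

# Example with rough liquid-nitrogen property values (order of magnitude only):
tank = StorageTank(volume_m3=50.0, fill_fraction=0.9, heat_ingress_w=150.0,
                   latent_heat_j_per_kg=199_000.0, liquid_density_kg_m3=806.0)
print(f"Daily boil-off: {tank.daily_boil_off_percent():.3f} %")
```

Packaging the tank as a class is what makes item 4 practical: reproducing a figure then amounts to constructing a tank instance with the inputs reported in the paper.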
It would be greatly helpful to have the scientific record I've published independently checked, so that any errors could be corrected. I would also learn how to share data in a way that is more accessible to others if you could give me feedback.
If all went right, the analysis should be fully reproducible without the need for any adjustments. The paper aims to find optimal locations for new parkruns, but we were not 100% sure how 'optimal' should be defined. We provide a few examples, but the code was meant to be flexible enough to allow potential decision makers to specify their own, alternative objectives (a rough sketch of what such an objective might look like follows below). The spatial data set is also quite interesting and fun to play around with. Caveat: the full analysis takes a while to run (30+ minutes) and might require 8 GB of RAM or more.
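To illustrate what "specify their own, alternative objectives" could mean in practice, here is a minimal sketch of a pluggable objective function. The function names and data columns are hypothetical, not the repository's actual interface:

```python
# Hypothetical sketch of pluggable objectives for siting new parkrun events.
# Function and column names are illustrative; the repository's actual
# interface may differ.
import pandas as pd

def max_new_runners(candidates: pd.DataFrame) -> pd.Series:
    """Score candidate sites by population living within 5 km."""
    return candidates["pop_within_5km"]

def min_travel_distance(candidates: pd.DataFrame) -> pd.Series:
    """Score candidate sites by (negative) distance to the nearest event."""
    return -candidates["dist_to_nearest_event_km"]

def pick_best_sites(candidates: pd.DataFrame, objective, n: int = 3) -> pd.DataFrame:
    """Rank candidate sites under a user-supplied objective; keep the top n."""
    return candidates.assign(score=objective(candidates)).nlargest(n, "score")

# A decision maker swaps in whichever objective matches their priorities:
candidates = pd.DataFrame({
    "site": ["A", "B", "C", "D"],
    "pop_within_5km": [12_000, 45_000, 30_000, 8_000],
    "dist_to_nearest_event_km": [2.1, 7.5, 4.0, 1.2],
})
print(pick_best_sites(candidates, max_new_runners, n=2))
```

The design choice here is that the ranking machinery stays fixed while the objective is just a function argument, which is what lets decision makers encode their own definition of 'optimal'.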
The results of the four individual studies could be interpreted as supporting the hypothesis, but the meta-analysis suggested that implicit identification was not a useful predictor overall. This conclusion is an important reference point for future work.
This paper shows a fun and interesting simulation result. I find it (of course) very important that our results are reproducible. In this paper, however, we did not include the exact code for these specific simulations, but the results should be reproducible using the code from our previous paper in PLOS Computational Biology (Van Oers, Rens et al., https://doi.org/10.1371/journal.pcbi.1003774). I am genuinely curious to see whether that provides sufficient information to reproduce the Biophys J paper, or whether we should have done better. Other people have already successfully built upon the 2014 (PLOS) paper using our code; see e.g. https://journals.aps.org/pre/abstract/10.1103/PhysRevE.97.012408 and https://doi.org/10.1101/701037.